
[draft] add llamafile 🦙📁 #866

Open · wants to merge 4 commits into main

Conversation

not-lain (Contributor)

llamafile is a local application for distributing and running LLMs using a single file, developed by jart in collaboration with the Mozilla team.
You can already filter by llamafile on hf.co/models under the Libraries section, so I thought this would be a good time to add code snippets for the library.

The app handles:

  • .gguf files
  • .llamafile files 🦙📁

This PR tackles only .gguf files, leaving .llamafile for another PR, because the current huggingface.js does not handle {{LLAMAFILE_FILE}} yet (I think).

I barely know any JS, so I might need some expert help with this PR.

An example code snippet, as provided by the maintainer of the library, could be used as follows:

wget https://github.com/Mozilla-Ocho/llamafile/releases/download/0.8.13/llamafile-0.8.13
wget https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q6_K.gguf
chmod +x llamafile-0.8.13
./llamafile-0.8.13 -m tinyllama-1.1b-chat-v1.0.Q6_K.gguf -p 'four score and'

As for the .llamafile snippet, I have reported it internally to our chief 🦙 officer @osanseviero (private DM).
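
For anyone picking this up, here is roughly how I picture those commands translating into a snippet builder (an untested sketch in my beginner JS/TS; I'm assuming the Hub replaces the {{GGUF_FILE}} placeholder with whichever file the user selects):

const snippetLlamafileGGUF = (model: { id: string }, filepath?: string): string =>
	[
		"# Download a llamafile release binary and the model weights",
		"wget https://github.com/Mozilla-Ocho/llamafile/releases/download/0.8.13/llamafile-0.8.13",
		`wget https://huggingface.co/${model.id}/resolve/main/${filepath ?? "{{GGUF_FILE}}"}`,
		"# Make the binary executable, then run the model",
		"chmod +x llamafile-0.8.13",
		`./llamafile-0.8.13 -m ${filepath ?? "{{GGUF_FILE}}"} -p 'four score and'`,
	].join("\n");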

@Vaibhavs10 (Member) left a comment:

Thanks for the PR @not-lain, copying our message from Slack here:

A good idea would be to add llamafile as a LocalApp instead of a library (that's also how llama.cpp and llama-cpp-python are added).

So what I'd recommend is that you open a PR to Local Apps: https://github.com/huggingface/huggingface.js/blob/1de39598a231afab805e252d2b27e0ec56a1897a/packages/tasks/src/local-apps.ts#L142

This way we can add support for both GGUFs and llamafiles in the same PR, and people would be able to opt in to the local app as well. That makes it more useful for people.

Would you be keen on opening a PR for that?
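
For reference, the entry would look roughly like this (a sketch only: docsUrl, mainTask, displayOnModelPage, and snippet mirror the diff below, while prettyLabel and the exact registry shape are my assumptions; check local-apps.ts for the real interface):

// Sketch of a new entry in the local-apps registry. isLlamaCppGgufModel and
// snippetLlamafileGGUF are the helpers referenced in this PR's diff.
llamafile: {
	prettyLabel: "llamafile",
	docsUrl: "https://github.com/Mozilla-Ocho/llamafile",
	mainTask: "text-generation",
	displayOnModelPage: isLlamaCppGgufModel, // later: also match .llamafile files
	snippet: snippetLlamafileGGUF,
},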

const command = (binary: string) =>
[
"# Load and run the model :",
`wget https://huggingface.co/${model.id}/resolve/main/`.concat(`${filepath ?? "{{GGUF_FILE}}"}`), // could not figure out how to do it without concat
Member:

Suggested change:
- `wget https://huggingface.co/${model.id}/resolve/main/`.concat(`${filepath ?? "{{GGUF_FILE}}"}`), // could not figure out how to do it without concat
+ `wget https://huggingface.co/${model.id}/resolve/main/${filepath ?? '{{GGUF_FILE}}'}`,

Would something like that work? (not sure)
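
A quick sanity check suggests it does: the `??` fallback evaluates inside the interpolation itself, so the `.concat` call is redundant. For example (the "org/model" repo id is made up for illustration):

const filepath: string | undefined = undefined;
// nullish coalescing works directly inside the template literal
console.log(`wget https://huggingface.co/org/model/resolve/main/${filepath ?? "{{GGUF_FILE}}"}`);
// prints: wget https://huggingface.co/org/model/resolve/main/{{GGUF_FILE}}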

"# Load and run the model :",
`wget https://huggingface.co/${model.id}/resolve/main/`.concat(`${filepath ?? "{{GGUF_FILE}}"}`), // could not figure out how to do it without concat
`chmod +x ${binary}`,
`${binary} -m ${filepath?? "{{GGUF_FILE}}"} -p 'You are a helpful assistant' `, // will this create a second dropdown ?
Member:

Suggested change:
- `${binary} -m ${filepath?? "{{GGUF_FILE}}"} -p 'You are a helpful assistant' `, // will this create a second dropdown ?
+ `${binary} -m ${filepath ?? "{{GGUF_FILE}}"} -p 'You are a helpful assistant'`, // will this create a second dropdown ?

docsUrl : "https://github.com/Mozilla-Ocho/llamafile",
mainTask : "text-generation",
displayOnModelPage : isLlamaCppGgufModel, // update this later to include .llamafile
snippet: snippetLlamafileGGUF ,
Member:

Suggested change:
- snippet: snippetLlamafileGGUF ,
+ snippet: snippetLlamafileGGUF,

].join("\n");
return [
{
title: "Use pre-built binary",
@ngxson (Member) commented on Aug 27, 2024:

I don't think this step is correct for llamafile, because the downloaded model file is already a pre-built binary.

The release binary from https://github.com/Mozilla-Ocho/llamafile/releases is used in the Creating llamafiles section of the README, so this step should be named Create your own llamafile (or we could also remove this section and just leave a link to the README).
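
Roughly what I have in mind (illustrative only: I'm assuming the snippet entries are plain { title, content } objects, which may not match the actual LocalAppSnippet type):

const snippets = [
	{
		// a downloaded .llamafile is already executable on its own
		title: "Run the llamafile",
		content: ["chmod +x {{LLAMAFILE_FILE}}", "./{{LLAMAFILE_FILE}} -p 'four score and'"].join("\n"),
	},
	{
		// the release binary belongs here, per the README's "Creating llamafiles" section
		title: "Create your own llamafile",
		content: "# see https://github.com/Mozilla-Ocho/llamafile (Creating llamafiles)",
	},
];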

@Vaibhavs10 (Member) left a comment:

Hey @ngxson @pcuenca - this is just a placeholder PR (so no need for a thorough review just yet); with @not-lain we'll have a new PR for Local Apps soon. I'll add you as reviewers when it's up!

@Vaibhavs10 changed the title from "add llamafile 🦙📁" to "[draft] add llamafile 🦙📁" on Aug 27, 2024
@not-lain (Contributor, Author):

cc @pcuenca and @ngxson
I have opened a new issue with all the necessary info related to llamafile at #871.
I acknowledge my shortcomings when it comes to JavaScript, and I hope you will consider this PR a beginner reference for developing your own PRs.
I will be abandoning this work, but I will keep it open in case other people want to join in too.
All the best!
